We kindly thank the reviewers for their detailed reviews, valuable feedback, and suggestions for improvement. A common concern from all reviewers is the scope of our result in the equivariant case. Indeed, our proof of the new Stone-Weierstrass (SW) theorem relies on an "ordering" of the coordinates; an equivariant SW theorem under an arbitrary finite group action would be desirable, but its proof is out of our reach as of today. In a way, this limitation is similar to the distinction between "point clouds" (which […]). We will add this discussion in the paper and mention it in the abstract. In its "deep" original version, it covers all types of "Message-Passing" GNNs, but not spectral GNNs, which use powers of the adjacency matrix. We will clarify this in the final version.
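For context, the two properties at stake and the classical theorem being generalized can be stated as follows (the notation is ours, reconstructed from the abstracts below, not the authors'):

\[
\text{invariant: } f(g \cdot x) = f(x), \qquad \text{equivariant: } f(g \cdot x) = g \cdot f(x), \qquad \text{for all } g \in G.
\]

The classical Stone-Weierstrass theorem states that a subalgebra \(\mathcal{A} \subseteq C(X)\), for \(X\) compact, that contains the constants and separates points is uniformly dense in \(C(X)\). The paper proves an analogous density result for algebras of equivariant functions when \(G\) is the permutation group acting on coordinates; the rebuttal's point is that this proof technique (ordering the coordinates) does not obviously carry over to an arbitrary finite group \(G\).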
Characterizing the Expressive Power of Invariant and Equivariant Graph Neural Networks
Various classes of Graph Neural Networks (GNN) have been proposed and shown to be successful in a wide range of applications with graph-structured data. In this paper, we propose a theoretical framework able to compare the expressive power of these GNN architectures. The current universality theorems only apply to intractable classes of GNNs. Here, we prove the first approximation guarantees for practical GNNs, paving the way for a better understanding of their generalization. Our theoretical results are proved for invariant GNNs computing a graph embedding (permutation of the nodes of the input graph does not affect the output) and equivariant GNNs computing an embedding of the nodes (permutation of the input permutes the output). We show that Folklore Graph Neural Networks (FGNN), which are tensor-based GNNs augmented with matrix multiplication, are the most expressive architectures proposed so far for a given tensor order. We illustrate our results on the Quadratic Assignment Problem (an NP-hard combinatorial problem) by showing that FGNNs are able to learn how to solve the problem, leading to much better average performance than existing algorithms (based on spectral, SDP, or other GNN architectures). On the practical side, we also implement masked tensors to handle batches of graphs of varying sizes.
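To make the FGNN layer and the masking concrete, here is a minimal sketch in PyTorch. The class name FGNNLayer, the choice of single linear maps m0/m1/m2 (a full implementation would typically use MLPs), and the masking convention are our assumptions, not the paper's released code; the update implements the folklore rule h'_ij = m0(h_ij) + sum_k m1(h_ik) * m2(h_kj).

import torch
import torch.nn as nn

class FGNNLayer(nn.Module):
    # One Folklore GNN layer on edge features h of shape (B, n, n, d_in):
    # a pointwise term plus a channel-wise matrix-multiplication term.
    def __init__(self, d_in, d_out):
        super().__init__()
        self.m0 = nn.Linear(d_in, d_out)  # pointwise update of h_ij
        self.m1 = nn.Linear(d_in, d_out)  # left factor of the product
        self.m2 = nn.Linear(d_in, d_out)  # right factor of the product

    def forward(self, h, node_mask):
        # node_mask: (B, n) with 1.0 for real nodes and 0.0 for padding,
        # so graphs of different sizes share one padded batch ("masked tensors").
        m = (node_mask[:, :, None] * node_mask[:, None, :])[..., None]  # (B, n, n, 1)
        a = self.m1(h) * m
        b = self.m2(h) * m
        # prod[B,i,j,d] = sum_k a[B,i,k,d] * b[B,k,j,d]  (batched, per channel)
        prod = torch.einsum('bikd,bkjd->bijd', a, b)
        return (self.m0(h) + prod) * m

A full model would stack several such layers with non-linearities between them, then finish with an invariant readout (e.g. a masked sum over nodes) for graph-level outputs, or an equivariant one for node-level outputs.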
Universal Invariant and Equivariant Graph Neural Networks
Keriven, Nicolas, Peyré, Gabriel
Graph Neural Networks (GNN) come in many flavors, but should always be either invariant (permutation of the nodes of the input graph does not affect the output) or equivariant (permutation of the input permutes the output). In this paper, we consider a specific class of invariant and equivariant networks, for which we prove new universality theorems. More precisely, we consider networks with a single hidden layer, obtained by summing channels formed by applying an equivariant linear operator, a pointwise non-linearity, and either an invariant or equivariant linear operator. Recently, Maron et al. (2019) showed that by allowing higher-order tensorization inside the network, universal invariant GNNs can be obtained. As a first contribution, we propose an alternative proof of this result, which relies on the Stone-Weierstrass theorem for algebras of real-valued functions. Our main contribution is then an extension of this result to the equivariant case, which appears in many practical applications but has been less studied from a theoretical point of view. The proof relies on a new generalized Stone-Weierstrass theorem for algebras of equivariant functions, which is of independent interest. Finally, unlike many previous settings that consider a fixed number of nodes, our results show that a GNN defined by a single set of parameters can approximate uniformly well a function defined on graphs of varying size.
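Concretely, the one-hidden-layer networks described in this abstract can be written as follows (our notation, reconstructed from the description above):

\[
f(G) \;=\; \sum_{c=1}^{C} H_c\big[\rho\big(L_c[G]\big)\big],
\]

where each \(L_c\) is a linear permutation-equivariant operator (possibly raising the tensor order of its input), \(\rho\) is a pointwise non-linearity, and each \(H_c\) is a linear invariant operator in the invariant case (so that \(f(\sigma \cdot G) = f(G)\) for every permutation \(\sigma\)) or a linear equivariant operator in the equivariant case (so that \(f(\sigma \cdot G) = \sigma \cdot f(G)\)).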